Greg Detre
Thursday, 30 November, 2000
Mind VII, Snowdon
Our curiosity has focused on the
problem of Mind since ancient Greek thought. The confusion has been expressed
in many forms, revolving around the relation between the mind and body,
physical and non-physical, or subjective and objective. The question can be
turned on its head: 'Are humans (merely) machines?' Seen in this light, the
fields of physics, physiology and computer science are key. Indeed,
technological progress has provided a succession of new paradigms through which
to try to see our brain-minds, and new mechanisms with which to try and create
artificial minds.
Certainly, our knowledge of the
workings of the brain has deepened, both at the neuronal level and at the macroscopic
systems level (benefiting from greater temporal and spatial resolution in
neuroimaging). At the same time, our understanding of the physical laws of our
universe has broadened, and become quite removed from anything we would
characterise as common-sensical or intuitive, especially at the quantum level.
Lastly, in computer science, AI and connectionist researchers once felt able to
make strong claims about the feasibility of building a machine that can think;
half a century later, their credibility has suffered considerably. The
ambitions of both fields have since narrowed, along with the complexity of the
phenomena they seek to model.
One of the problems with talking
about mind is that it is, by definition, subjective. We cannot access others'
mental lives, except indirectly through language. We attempt to categorise our
inner world as words: broad, shifting categories, never sure whether the other
person is describing the same feeling when he uses a word, only that he uses it
in an appropriate way. The thought experiment of the inverted spectrum could
apply here. As a result, we have no means of identifying with certainty whether
another being has a mental life like our own. Yet, we assume other humans do,
because they behave similarly, they're built similarly, and they talk about a
mental life in such a convincing fashion that it seems impossible to imagine
that they are zombies. However, the problem of other minds becomes even more
crippling when we look at animals, machines or rocks, who cannot communicate on
our terms, and whose behaviour isn't so anthropomorphic that we automatically
impose an assumption of consciousness upon them.
Answering the question of whether
machines can think comes down to two issues: what we mean by 'thought', and whether
we can be described as machines.
The term 'machine' has often been
used to describe an automaton, as contrasted with a being possessing free
will. I am going to try to avoid discussion of determinism in physical
processes here, and simply define machines as 'constructed and containing
no living tissue'. We will return to the implications of such a definition
below.
As Bill Bulko puts it, 'Artificial
intelligence is the art of making computers that behave like the ones in
movies'. But as we will see, this may be asking too much. Certainly, 'thinking'
is intelligent mental activity; but that doesn't really help us. In order to
know how hard a hurdle we are setting for machines, we need to place 'thinking'
relative to the different standards of mentality that we seem to recognise.
This is especially difficult because we need to remember that non-human forms
of mentality will be very different from our own: we cannot simply disentangle
the 'human' from 'human mind' on the basis of just our own case. I am going to
try to set out a series of assertions which characterise thought, in
increasing order of the strength of their claims.
1. Newell and Simon[1] defined intelligence as:
first, specifying a goal; then
assessing a current situation to see how it differs from the goal; and
applying a set of operations that reduce the difference.
Thus, the power of one's thinking is a measure of:
the accuracy and reach of one's evaluations of the distance from current state to goal;
the flexibility of behaviour produced in response;
the range of different situations in which the system can operate, and learn to operate.
We regard
ourselves as the yardstick of complex thought because humans display extremely
flexible, long-term behaviour in a huge range of different situations, and so
we regard ourselves as thinking considerably better than our fragile,
insect-level robots.
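The goal-difference-operator loop above can be sketched in a few lines of code. Everything concrete here (the numeric states, the target, the three operators) is a hypothetical toy, not drawn from Newell and Simon's actual program:

```python
# A minimal sketch of difference-reduction (means-ends) search:
# assess the distance from the current state to the goal, then apply
# whichever operator most reduces it.

def solve(state, goal, operators, max_steps=100):
    """Greedily apply the operator that brings the state closest to the goal."""
    trace = [state]
    for _ in range(max_steps):
        if state == goal:                       # goal reached
            return trace
        # evaluate how far each operator's result is from the goal
        best_distance, best_state = min(
            (abs(goal - op(state)), op(state)) for op in operators)
        if best_distance >= abs(goal - state):  # no operator helps: give up
            return None
        state = best_state
        trace.append(state)
    return None

# toy operators on integers: increment, double, decrement
ops = [lambda n: n + 1, lambda n: n * 2, lambda n: n - 1]
print(solve(3, 14, ops))  # [3, 6, 12, 13, 14]
```

On this reading, the 'power' of the thinker is exactly what Newell and Simon say it is: the accuracy of the distance measure and the flexibility of the operator set. A greedy solver like this one fails in any situation where neither is adequate.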
2. Hume's model divided the mind into impressions (i.e.
sense-data) and ideas, our memories of impressions. Simple ideas relate
directly to an impression, while complex ideas are comprised of other ideas.
This explanation seems quite adequate when describing our idea of an angel (the
conjunction of two simple ideas: a figure of a man plus wings), or heaven
(various complex ideas: pearls plus gates, gold plus palaces, etc.). Thinking
is seen as the categorising and associating mechanism by which we assemble
knowledge from experience.
If we
take this idea of association a little further, we have the foundations of
connectionism. Simply put, connectionism attempts to produce schematised models
of the brain at the neuronal level. The belief is that the brain itself can be
viewed as a machine, whose organisational complexity is such that it gives rise
to high-level behaviour as an emergent phenomenon. Each neuron is itself a
biological machine of considerable intricacy, yet because the sub-neuronal
activity is considered to be of less importance than the connectivity between
the neurons, or 'nodes', their activity is modelled relatively simply, often as
little more than binary logic gates or perhaps a non-linear function with a
threshold. When even a small group of neurons are connected to each other with
pre-set or self-organised 'synaptic weights', they can approximate practically
any function, as well as robustly recognising and generalising patterns (i.e.
learning). Connectionists hope that by retaining the criterion of biological
plausibility, they can learn the tricks that evolution has painstakingly
developed, and eventually leapfrog the brain, given sufficient processing
power.
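A single such node, assuming only what the paragraph describes (a weighted sum passed through a threshold, with 'synaptic weights' adjusted by a simple error-correction rule; all names below are illustrative), can organise itself to compute a logic gate:

```python
# One 'node': fire if the weighted sum of inputs crosses the threshold.
def fire(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# Perceptron-style learning: nudge each weight by the output error.
def train(patterns, epochs=20, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in patterns:
            error = target - fire(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# self-organise the weights for a logical AND
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND)
print([fire(weights, bias, inputs) for inputs, _ in AND])  # [0, 0, 0, 1]
```

Recognising patterns more interesting than AND requires the many-node, multi-layered networks the paragraph alludes to, but the ingredients are the same: connectivity and adjustable weights rather than sub-neuronal detail.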
Minsky
claims that a move from association to meaning is not as large a step as it
seems[2].
He uses the example of numbers as a case where we think we understand
numbers, where they appear to have meaning for us, yet we are unable to pin it
down. Russell and Whitehead's somewhat counter-intuitive attempt at defining
'five' as the 'set of all possible sets with five members' ran into paradox and
inconsistency (though they were eventually able to overcome the fundamental
circularity). According to Minsky, this is largely because their rigid
definitions of number run contrary to the truth that meaning is relative: it
derives from relations with other things that are known. To try to define a
meaning in isolation, rather than as a piece of a jigsaw of knowledge, is
doomed. The more links a concept has, the more meaning it has; hence the
futility of searching for the 'real' meaning of something, as though, if it
were connected to only one other thing, it would scarcely 'mean' at all:
Rich,
multiply-connected networks provide enough different ways to use knowledge that
when one way doesn't work, you can try to figure out why. When there are many
meanings in a network, you can turn things around in your mind and look at them
from different perspectives; when you get stuck, you can try another view.
That's what we mean by thinking!
3. We usually describe thoughts as 'intentional', or as having
propositional content, i.e. thoughts are always about something: they
are directed at an object. In functional terms, a representation can be seen as
something we can manipulate, a mental object that we can perform operations on
in our mind before trying them out in our physical environment. Connectionism
appears to demonstrate how a representation of the external world might be held
in distributed form in the synaptic connections between neurons. We can imagine
that many animals, especially higher primates, hold rudimentary inner
representations of the physical world; this would be essential in tool use,
for example. However, social animals, animals who live and cooperate in groups,
need to work even harder to model their environment because they need to
incorporate other agents. Such social cognition, having beliefs about others'
mental states, requires larger brains and more complex representations. When we
turn our capacity to represent other minds onto our own mind, we are
having higher order thoughts (HOTs): thoughts about thoughts. This
reflexiveness is often regarded as defining 'self-awareness'.
4. At this point, we start to have difficulty deciding which
of our faculties underpin each other. We know that language allows us to
express these higher order thoughts, to make them explicit so that they are
communicable. But we are unable to ask a baby or mute adult whether they
are having HOTs, so we cannot be sure whether we can have HOTs without
language. Similarly, it seems unlikely that one could produce human language
fully without HOTs. (However, chimps[3]
and parrots[4] have
demonstrated considerable facility at recognising, remembering and employing
linguistic symbols meaningfully, like 'food', 'chase', 'green' and 'more', as
well as almost full comprehension of spoken English. They can identify and
refer to people, other animals and their own mood (I think), and spontaneously
produce novel constructions, but even the most successful studies have shown
practically no proper use of syntax.)
For
this reason and others, Alan Turing chose language as the vehicle for
discriminating and demonstrating intelligence in his imitation game, the Turing
test. Indeed, passing the Turing test surely demonstrates considerable and very
human-like intelligence. Unfortunately, machines are a very long way from this
goal, despite the misleading results of restricted public experiments like
Tomorrow's World and the Loebner Prize[5].
Part of the problem, and indeed the attraction, of the Turing test is that it
requires so much of a successful participant. They need a vast meaning-coded
lexicon, a knowledge of syntax that probably far exceeds the best contemporary
understanding of linguistics, and facility with all of English's exceptions,
nuances and clichés. Moreover, since the domain of conversation could wander
from film to shopping to girlfriends, the machine would need to have
accumulated a knowledge base of the mundane, useful and quintessentially human
that takes us many years to build. Much of this knowledge is species-specific:
what sense could a computer make of hunger, love or pain, if its physiology is
so different from ours that it need not feel them? After all, though we regard
them as truths, they are truths of the human condition: we feel the way we do
because of the particular (arbitrary) way in which we have evolved. If we could
live on water, then hunger would be meaningless, and possibly inconceivable.
Almost all of our common sense and shared language is predicated on our shared
physiology, and for similar reasons that Wittgenstein denies that we could have
a private language, it seems that a public language understood by non-humans
would be very different from English, by virtue of having to be more objective
and so less expressive. The Turing test then could be described as a
sufficient, but not necessary condition for intelligence. Any machine that
passes will have to be intelligent, but probably considerably more intelligent
than us in order to pass a test couched so firmly in our terms; after all, no
human could pass a similar Turing test conducted in machine code or whalesong,
say.
5. Probably the ultimate requirement of a thinking machine is
true phenomenological consciousness. Thought always has a phenomenological
aspect for us: there is always something it is like for the human thinker to
have a thought. We assume that a truly intelligent machine would, of necessity,
know what it feels like to be itself.
The reason for spending so long
debating what we mean by 'thinking' is that the answer to our question hinges
on our criterion of thought. If we take the strongest claim for thought, that
machines must be fluent English speakers, human-seeming, and (claim to) be
fully conscious, then truly intelligent machines are a long way off. But, I
think most people would be prepared to accept a less strong claim as
'thinking'. I would like to set the boundary at around the third or fourth
level: higher order thoughts, with some sort of means of expressing them.
Testing for such criteria will be difficult: either we will have to learn the
machine's language, or machines will have to learn ours, which, as we have
said, may be an even harder requirement.
Before continuing, there are a
multitude of poor reasons for being quite sure that machines cannot think,
which we should dismiss.
There is a (Christian) religious
argument against thinking machines, which could run on many lines. In
pre-Darwinian times, the Christian Church would have dismissed the possibility
of thinking machines out of hand. Humans were regarded as divinely created and
distinct from the animal 'bête machines', elevated by our souls, language and
free will. The Church had to substantially revise its assertions after the
theory of evolution became scientifically sacrosanct (although there are still
parts of Bible-belt America where the congregation is held in Genesis'
thrall). Man and machine would now be considered divided by the
divinely-instated mechanism of evolution. It remains to be seen whether the
Church could credibly accept a machine consciousness into heaven. Any variation
on this theme which holds humans to be without doubt the only intelligent life
in the universe is equally unreasonable, or would have to rely on some
quasi-religious, unscientific premise or leap of faith: it is remotely possible
that humans are the only intelligent life in the universe, but there is no
scientific reason to suppose that this is necessarily so. There is nothing
special about humanity � we may be at the evolutionary pinnacle of our planet,
but our view over the rest of the cosmos is sorely limited. The second main
religious argument against intelligent machines is that it places us in a
somewhat god-like role: by intimating that scientists can understand the
workings of the highest orders of life, and even create new and 'better' life,
it reduces the divine miracle of life. This is unacceptable for the same reason
that evolution was considered abhorrent: what is so god-like about God if we
can forge souls ourselves? Those with a more literary turn of mind might choose
to see parallels between AI and the Tree of Knowledge of Good and Evil in the
Garden of Eden…
There is a much wider category of
people, branded carbon-chauvinists, who deem machines wholly incapable of
thought because they are not biologically alive. By this, I mean that machines
need not be carbon-based, or indeed contain any of the chemicals in our bodies,
and their physiology is likely to be unrecognisably different. Furthermore,
machines are artificial: we will have designed and built them. They would not
exist 'naturally', i.e. had there not been other intelligent life to first
create them. They are evolutionarily separate from us and the entire animal and
plant kingdoms. In fact, to emphasise the differences, they could be altogether
virtual. The machines we could be talking about might be lines of code amongst
other lines of code, A-life programs running in a virtual environment on a
completely mindless supercomputer. Although they may behave and look like
simple life forms, virtual machines are even harder for carbon chauvinists to
accept than steel and silicon.
Without wanting to digress from
the topic of intelligence in machines, it might be worthwhile considering
briefly how strong the case is for classing certain machines in existence
today as 'alive'. Steen Rasmussen's contentious pamphlet, 'Aspects of
Information, Life, Reality and Physics', distributed at the second A-life
conference, contained the following argument:
1. A universal computer is indeed universal and can emulate
any process (Turing)
2. The essence of life is a process (von Neumann)
3. There exist criteria by which we are able to distinguish
living from non-living things
Accepting
(1), (2) and (3) implies the possibility of life in a computer
4. If somebody manages to develop life in a computer
environment which satisfies (3), it follows from (2) that those life-forms are
just as alive as you and I.
5. Such an artificial organism must perceive a reality R2,
which for itself is just as real as our 'real' reality R1 is for us.
6. From (5) we conclude that R1 and R2
have the same ontological status. Although R2 is in a material way
embedded in R1, R2 is independent of R1.
If sound, this argument would
expose the fallacy of dividing life into the terrestrially evolved and not.
Clearly, the argument hinges on the first two premises, since we can devise
criteria of our own to satisfy the third premise; after all, it may be that
the notion of dividing the world into 'alive' and 'inanimate' is itself a
human, artificial distinction. The first premise is reasonable, if we allow the
machine an infinite amount of time. Although this sounds like quite a large requirement,
it is merely a nod to the difference between easy and hard computational
problems, i.e. those where we can solve them in finite time, and those where we
don't know if we can solve them at all. If nature can compute a process, then a
computer (especially a massively parallel one) can too. The second premise is
the most important one: if we bar God from the proceedings, then we have to
look to scientific criteria for defining life. We can easily imagine a machine
that could fulfil the following criteria:
consume from its environment (by ingesting and probably excreting);
respond to stimuli;
operate autonomously (i.e. without control from another being);
reproduce (and evolve);
grow;
even if it did not:
excrete;
respire;
have DNA.
'Excretion' is only necessary if
the consumption requires some transduction of energy. I have deliberately
avoided it, since a virtual machine would 'consume' processor time and possibly
virtual food, but need not 'excrete' in any real sense of the word. DNA and
respiration are clearly the hallmarks of creatures which have evolved from
common ancestry to live on this planet, and there is no reason to exclude all
non-DNA non-respiring extra-terrestrials from being alive. After all, it seems
likely that life in a Jovian atmosphere or a remote and remarkable planet would
have adapted to become quite different from what we are used to. The five main
criteria are all processes. So we can accept Rasmussen�s second premise, and
potentially accept machines into our hallowed hall of Life.
Computers are often described as
'mindless' and slavishly mechanical. This is true, insofar as they obey the
laws of physics to the letter, and we are able to take advantage of our
knowledge of those laws to carry out specific processes. However, to use the
deterministic nature of computers today as evidence that they can never think
creatively, spontaneously or have free will seems rather strange. After all,
the human brain appears to obey the same laws of physics that machines do. If
we accept the materialistic premise, then we can see that there are some
machines (i.e. brains) which do have free will. The question of whether
machines can exercise free will in the same way becomes an empirical one of
'Can we discover whatever new physical understanding of matter is necessary to
realise how things work in the brain, and then incorporate this into machines?'
If, as Penrose and Swinburne argue, quantum mechanics plays a vital role in
psychophysical interaction, then we may need quantum computers before our machines
can think properly. But having said all this, it may be that we can have
intelligence without consciousness. Intelligence does not require free will in
the same way, though often intelligence has the appearance of spontaneity.
The last reason for stating with
certainty that machines could not be intelligent is processing power.
Comparisons are difficult because it is hard to estimate the processing power
of a single neuron relative to a computer. However, by comparing the processing
power of retinal ganglion cells (where the number of neurons is known) with
similar calculations on a computer, we can get an estimate of the brain�s
processing power. Current estimates[6]
place supercomputers about a decade behind the brain in processing power, but
potentially ahead in storage. Although Moore's Law (that computers of a given
price will double in speed about every 18 months) has held for over three
decades, Intel will soon have to contend with quantum effects. However, it seems
certain that the processing power available to researchers will ultimately grow
to exceed that of the brain. After all, 'if the automobile had followed the
same development cycle as the computer, a Rolls-Royce would today cost $100,
get a million miles per gallon, and explode once a year, killing everyone
inside.'[7]
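The compounding this implies is easy to check; the figures below simply restate the rate quoted above (a doubling every 18 months) over those three decades:

```python
# Three decades of doubling every 18 months.
months = 30 * 12
doublings = months / 18        # 20 doublings
speedup = 2 ** doublings
print(f"{doublings:.0f} doublings, ~{speedup:,.0f}x")  # 20 doublings, ~1,048,576x
```

Twenty doublings is roughly a million-fold, which is why even a decade's shortfall behind the brain looks temporary on this trend.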
When asked 'Can machines think?',
one prominent computer scientist replied, 'Can a submarine swim?' This would
seem to indicate that the question is ill-posed somehow, or that the answer is
a clear 'no'. Yet, Minsky argues convincingly that it is unreasonable to claim
that there is any basic difference between the minds of men and possible
machines.
Indeed, as Danny Hillis notes,
because machines are artificial rather than evolved, they can be built in such
a way as to allow them to re-wire, re-program and re-build themselves; they
could well be in an even better situation than we are, and lead richer, happier
mental lives. This raises the ethical quandary of conscious machines. A machine
demanding rights would probably be another sufficient sign of intelligence,
raising the same issues as animal rights, but appearing even more alien and
chauvinist.
There is a historical trend away
from mysticism towards materialism. The mind remains one of the very
greatest mysteries. Modern materialism appears to shred this shroud with its
claim: if you accept its premises, then fully intelligent machines should be
very possible. If you don't accept materialism, then the question comes down to
whether something non-conscious can be 'intelligent', as opposed to merely
intelligent-seeming, a strange distinction.
[1] Newell, A. & Simon, H. A., General Problem Solver, a program that simulates human thought, 1963
[2] Minsky, M. (MIT), 'Why people think that computers can't', 1982
[3] Savage-Rumbaugh, E. S. & Lewin, R., Kanzi: The ape at the brink of the human mind, 1994
[4] Pepperberg, I., on 'Grey Parrot Intelligence', 1993, 1995
[5] Shieber, S. M., 'Lessons from a Restricted Turing Test'
[6] Crevier, D., AI: The tumultuous history of the search for Artificial Intelligence
[7] Cringely, R. X., InfoWorld